
$750,000 apocalypse SUV comes with its own gas mask

FOX News

Rezvani Motors, an innovative American automotive manufacturer, has redefined the luxury SUV market with its extraordinary Vengeance. This vehicle represents a groundbreaking fusion of military-inspired design and high-end luxury transportation. Designed by digital artist Milen Ivanov, known for his work in video game vehicle design, the Vengeance breaks conventional automotive boundaries with its aggressive styling and comprehensive security features.


Autonomous Network Defence using Reinforcement Learning

Foley, Myles, Hicks, Chris, Highnam, Kate, Mavroudis, Vasilios

arXiv.org Artificial Intelligence

In the network security arms race, the defender is significantly disadvantaged, as they need to successfully detect and counter every malicious attack. In contrast, the attacker needs to succeed only once. To level the playing field, we investigate the effectiveness of autonomous agents in a realistic network defence scenario. We first outline the problem, provide background on reinforcement learning, and detail our proposed agent design. Using a simulated network environment with 13 hosts spanning 3 subnets, we train a novel reinforcement learning agent and show that it can reliably defend against continual attacks by two advanced persistent threat (APT) red agents: one with complete knowledge of the network layout and another which must discover resources through exploration but is more general.
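The abstract sketches the train-a-defender-against-an-attacker setup at a high level. As a rough, self-contained illustration (not the authors' agent or environment, which use a 13-host simulation and deep RL), here is a toy tabular Q-learning defender on a miniature network MDP; the hosts, attacker model, reward, and hyperparameters below are all our own simplifications.

```python
import random
from collections import defaultdict

# Toy network-defence MDP (our own simplification, not the paper's simulator):
# the state is the set of compromised hosts, the defender either restores one
# host or monitors, and an attacker compromises a random healthy host each
# step with probability ATTACK_P.
N_HOSTS, ATTACK_P = 3, 0.5
ACTIONS = list(range(N_HOSTS + 1))        # 0..N_HOSTS-1: restore host i; N_HOSTS: monitor

def step(state, action):
    state = set(state)
    if action < N_HOSTS:
        state.discard(action)             # restore (clean) the chosen host
    if random.random() < ATTACK_P:
        healthy = [h for h in range(N_HOSTS) if h not in state]
        if healthy:
            state.add(random.choice(healthy))
    return frozenset(state), -len(state)  # reward: minus the compromised-host count

# Tabular Q-learning defender.
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.95, 0.1
for _ in range(5000):                     # episodes
    s = frozenset()
    for _ in range(50):                   # steps per episode
        if random.random() < eps:
            a = random.choice(ACTIONS)    # explore
        else:
            a = max(ACTIONS, key=lambda a_: Q[s, a_])
        s2, r = step(s, a)
        best_next = max(Q[s2, a_] for a_ in ACTIONS)
        Q[s, a] += alpha * (r + gamma * best_next - Q[s, a])
        s = s2
```

A deep RL agent like the paper's replaces the Q-table with a neural network and this toy attacker with the two APT red agents, but the train-against-attacker loop has the same shape.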


Eraser: Jailbreaking Defense in Large Language Models via Unlearning Harmful Knowledge

Lu, Weikai, Zeng, Ziqian, Wang, Jianwei, Lu, Zhengdong, Chen, Zelin, Zhuang, Huiping, Chen, Cen

arXiv.org Artificial Intelligence

Jailbreaking attacks can enable Large Language Models (LLMs) to bypass safeguards and generate harmful content. Existing jailbreaking defense methods have failed to address the fundamental issue that harmful knowledge resides within the model, leaving LLMs exposed to jailbreak risks. In this paper, we propose a novel defense method called Eraser, which pursues three goals: unlearning harmful knowledge, retaining general knowledge, and maintaining safety alignment. The intuition is that if an LLM forgets the specific knowledge required to answer a harmful question, it will no longer have the ability to answer harmful questions. Training Eraser does not actually require the model's own harmful knowledge, and it can benefit from unlearning general answers related to harmful queries, which means it does not need assistance from a red team. The experimental results show that Eraser can significantly reduce the jailbreaking success rate for various attacks without compromising the general capabilities of the model. Our codes are available at https://github.com/ZeroNLP/Eraser.
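As a schematic of the three goals (not the paper's exact objective, for which the repository above is the reference), here is a hedged PyTorch sketch of a combined unlearning update: gradient ascent on harmful completions, a retention loss on general data, and an alignment loss on refusal responses. The function name, batch format, and loss weights are assumptions.

```python
import torch

def eraser_style_step(model, harmful_batch, retain_batch, refusal_batch,
                      optimizer, w_unlearn=1.0, w_retain=1.0, w_align=1.0):
    """One combined update over the three goals named in the abstract.
    Assumes a Hugging Face-style causal LM where model(**batch) returns the
    token-level cross-entropy loss; batch contents and weights are illustrative.
    """
    # 1) Unlearn: gradient *ascent* on harmful answers, i.e. minimise the
    #    negated LM loss so the model forgets how to produce them.
    unlearn = -model(**harmful_batch).loss
    # 2) Retain: ordinary LM loss on general-purpose data to preserve ability.
    retain = model(**retain_batch).loss
    # 3) Align: ordinary LM loss on (harmful prompt -> refusal) pairs so the
    #    model keeps answering harmful questions with refusals.
    align = model(**refusal_batch).loss
    loss = w_unlearn * unlearn + w_retain * retain + w_align * align
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```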


Senators leave classified AI briefing confident but wary of 'existential' threat posed by China

FOX News

Senators left a classified briefing on artificial intelligence Tuesday with a deeper understanding of how AI is already being used to bolster U.S. national security and the looming threat China poses as it deploys its own AI capabilities. "I think, from a military perspective, it's very existential because China's playing for keeps," Sen. Eric Schmitt, R-Mo., told Fox News Digital after the closed-door session. He added, "So, it's moving quickly, but I think the best we can do right now is get a firm understanding." Tuesday afternoon's briefing was the first-ever classified meeting between senators and key Pentagon officials about AI. Discussion included how the U.S. is using AI to maintain its national security edge and how adversaries like China are using this emerging tool. Senate Majority Leader Chuck Schumer, D-N.Y., told reporters what he learned was "eye-opening." It comes after he told senators in a letter over the weekend that Congress is moving full steam ahead on his AI regulatory framework, which Schumer said Tuesday could take months to develop. "This briefing shows just depth, complexity, but necessity of getting something real done."


Cybersecurity trends: Looking over the horizon

#artificialintelligence

Cybersecurity has always been a never-ending race, but the rate of change is accelerating. Companies continue to invest in technology to run their businesses, and they are now layering more systems into their IT networks to support remote work, enhance the customer experience, and generate value, all of which creates potential new vulnerabilities. At the same time, adversaries, no longer limited to individual actors, include highly sophisticated organizations that leverage integrated tools and capabilities built on artificial intelligence and machine learning. The scope of the threat is growing, and no organization is immune. This article is a collaborative effort by Jim Boehm, Dennis Dias, Charlie Lewis, Kathleen Li, and Daniel Wallance, representing views from McKinsey's Risk & Resilience Practice.


Defending Against Adversarial Attacks by Suppressing the Largest Eigenvalue of Fisher Information Matrix

Shen, Chaomin, Peng, Yaxin, Zhang, Guixu, Fan, Jinsong

arXiv.org Machine Learning

We propose a scheme for defending against adversarial attacks by suppressing the largest eigenvalue of the Fisher information matrix (FIM). Our starting point is an explanation of the rationale behind adversarial examples. Based on the idea that the difference between a benign sample and its adversarial example is measured by the Euclidean norm, while the difference between their classification probability densities at the last (softmax) layer of the network is measured by the Kullback-Leibler (KL) divergence, the explanation shows that the output difference is a quadratic form of the input difference. If the largest eigenvalue of this quadratic form (whose matrix is the FIM) is large, the output difference becomes large even when the input difference is small, which explains the adversarial phenomenon. This makes adversarial defense possible by controlling the eigenvalues of the FIM. Our solution is to add a term representing the trace of the FIM to the loss function of the original network, since the largest eigenvalue is bounded by the trace. Our defensive scheme is verified by experiments using a variety of common attack methods on typical deep neural networks, e.g. LeNet, VGG and ResNet, with the MNIST, CIFAR-10, and German Traffic Sign Recognition Benchmark (GTSRB) datasets. After adopting the novel loss function and retraining, our new network has an effective and robust defensive capability: it decreases the fooling ratio of generated adversarial examples while retaining the classification accuracy of the original network.
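To make the proposed regularizer concrete, here is a minimal PyTorch sketch of the trace-of-FIM penalty the abstract describes, estimated by Monte Carlo sampling of the output distribution; the function names, estimator, and weight lam are our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fim_trace_penalty(model, x, num_samples=1):
    """Monte Carlo estimate of tr(FIM) of the softmax output w.r.t. the input:
    tr(G) = E_{y ~ p(.|x)} [ || grad_x log p(y|x) ||^2 ].
    Since the FIM is positive semi-definite, its trace bounds the largest
    eigenvalue, so penalising the trace suppresses that eigenvalue.
    """
    x = x.clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x), dim=-1)          # shape (B, C)
    probs = log_probs.exp().detach()
    trace = torch.zeros(x.shape[0], device=x.device)
    for _ in range(num_samples):
        y = torch.multinomial(probs, 1).squeeze(-1)      # sample y ~ p(.|x)
        selected = log_probs.gather(1, y.unsqueeze(-1)).sum()
        grads = torch.autograd.grad(selected, x, create_graph=True)[0]
        trace = trace + grads.pow(2).flatten(1).sum(-1)  # per-sample ||grad||^2
    return (trace / num_samples).mean()

# Training objective: standard cross-entropy plus the trace penalty;
# lam is an illustrative regularisation weight.
def defended_loss(model, x, labels, lam=0.1):
    return F.cross_entropy(model(x), labels) + lam * fim_trace_penalty(model, x)
```

Retraining the network with defended_loss in place of plain cross-entropy is the sketch's analogue of the retraining step the abstract describes.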